Belief Dynamics in Cooperative Dialogues

Authors

  • Andreas Herzig
  • Dominique Longin
Abstract

We investigate how belief change in cooperative dialogues can be handled within a modal logic of action, belief, and intention. We first review the main approaches in the literature and point out some of their shortcomings. We then propose a new framework for belief change. Our basic notion is that of a contextual topic: we suppose that we can associate a set of topics with every agent, speech act, and formula. This allows us to talk about an agent's competence, belief adoption, and belief preservation. Based on these principles we analyse the agents' belief states after a speech act. We illustrate our theory by a running example.

1 INTRODUCTION

Participants in task-oriented dialogues have a common goal: to achieve the task under consideration. Each of the participants has some information necessary to achieve the goal, but none of them can achieve it alone. Consider e.g. a system delivering train tickets to users. The system cannot do that without user input about destination and transport class. The other way round, the user needs the system to get his ticket.

Each of the participants is supposed to be cooperative. This is a fundamental and useful hypothesis. Informally, a person is cooperative with regard to another one if the former helps the latter to achieve his goals (cf. Grice's cooperation principles, as well as his conversation maxims (Grice 1975)). For example, if the system learns that the user wants a train ticket, then the system will intend to give it to him. The other way round, if the system asks for some piece of information it needs to print the ticket, then the user answers the questions asked by the system.

Each participant is supposed to be sincere: his utterances faithfully mirror his mental state. If a participant says 'the sky is blue', then he indeed believes that the sky is blue.
Such a hypothesis means that contradictions between the presuppositions of a speech act and the hearer's beliefs about the speaker cannot be explained in terms of lies. Note that our sincerity assumption is much weaker than in other approaches, where sincerity is sometimes viewed as the criterion of input adoption (Cohen & Levesque 1990c).

Under these hypotheses, how should the mental state of a rational agent participating in a conversation evolve? In the sequel we call belief change the process leading an agent from a mental state to a new one.

The following dialogue is our running example to highlight different problems and our solutions. There are only two agents, the system s and the user u:

  s1: Hello. What do you want?
  u1: A first class train ticket to Paris, please.
  s2: 150 €, please.
  u2: Ouups! A second-class train ticket, please.
  s3: 100 €, please.
  u3: Can I pay the 80 € by credit card?
  s4: The price isn't 80 €. The price is 100 €. Yes, you can pay the 100 € by credit card.

This illustrates that in a conversation agents might change their mind, make mistakes, understand wrongly, etc. Since by our cooperation hypothesis the agents interact with each other in order to achieve the dialogue goal, they are the victims of such phenomena. These phenomena must consequently be taken into account when modelling the evolution of mental states. In our example, the system

  • accepts some information (e.g. information about destination and class; cf. u1);
  • derives supplementary information not directly contained in the utterance by using laws about the world (e.g. to derive the price if the user informs about his destination and class; cf. s2);
  • sometimes accepts information contradicting its own beliefs, in particular when the user changes his mind (e.g. switching from a first-class ticket to a second-class ticket; cf. u2);
  • preserves some information it believed before the utterance (e.g. the system preserves the destination even when the class changes; cf. u2);
  • may refuse to take over some information, in particular if the user tries to inform the system about facts the user isn't competent at (e.g. prices of train tickets; cf. s4).

To sum up, s has two complementary tasks: (1) dealing with contradictions between his mental state and consequences of the input, and (2) preserving his old beliefs that do not contradict this input.

We consider each participant to be a rational agent having mental states represented by different mental attitudes such as belief, choice, goal, intention, etc. Belief change takes place within a formal rational balance theory and a formal rational interaction theory à la Cohen & Levesque (1990a, 1990c). These approaches analyse linguistic activity within a theory of actions: this is the base of so-called BDI architectures (for Belief, Desire, and Intention). Each utterance is represented by a (set of) speech act(s) (Austin 1962; Searle 1969), in a way similar to Sadek (2000).¹ Belief change triggered by these speech acts is analysed in terms of consequences of these speech acts.

From an objective point of view, a dialogue is a sequence of sets of speech acts ⟨α_1, ..., α_n⟩, where each α_{k+1} maps a state S_k to a new state S_{k+1}:

  S_0 --α_1--> S_1 --α_2--> ... --α_n--> S_n

S_0 is the initial state (before the dialogue starts). Given S_k and α_{k+1}, our task is to construct the new state S_{k+1}.

The background of our work is an effective generic real-time cooperative dialogue system that has been specified and developed by the France Telecom R&D Center. This approach consists in first describing the system's behaviour within a logical theory of rational interaction (Sadek 1991, 1992, 2000), and second implementing this theory within an inference system called ARTIMIS (Bretier & Sadek 1997; Sadek et al. 1996, 1997).
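The dialogue-as-state-sequence view just described can be sketched in a few lines of Python. This is our own illustration, not the paper's formalism: all names (`SpeechAct`, `State`, `run_dialogue`) are invented here, and the transition function is deliberately naive, adopting every content wholesale, which is exactly the behaviour the paper goes on to refine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpeechAct:
    speaker: str   # "s" (system) or "u" (user)
    content: str   # propositional content, e.g. "dest=Paris"

@dataclass
class State:
    beliefs: frozenset = frozenset()

def update(state, acts):
    """Map S_k and the act set alpha_{k+1} to the new state S_{k+1}.

    Naive transition for illustration only: every uttered content is
    adopted. A real agent must be more selective (adoption, preservation,
    competence), which is the point of the paper.
    """
    return State(beliefs=state.beliefs | {a.content for a in acts})

def run_dialogue(s0, dialogue):
    """Fold the sequence <alpha_1, ..., alpha_n> over the initial state S_0."""
    state = s0
    for acts in dialogue:
        state = update(state, acts)
    return state

dialogue = [
    {SpeechAct("u", "dest=Paris"), SpeechAct("u", "class=first")},  # u1
    {SpeechAct("s", "price=150")},                                  # s2
]
final = run_dialogue(State(), dialogue)
# final.beliefs == frozenset({"dest=Paris", "class=first", "price=150"})
```

Each element of `dialogue` is a *set* of acts, matching the paper's choice of sets of speech acts (one utterance may carry indirect speech acts).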
For a fixed set of domains, this system is able to accept nearly unconstrained spontaneous language as input, and to react in a cooperative way. The activities of the dialogue system are twofold: to take into account the speaker's utterances, and to generate appropriate reactions. The latter, reactive part is completely defined in the current state of both the theory and the implementation. On the other hand, the acceptance of an utterance is handled only partially, in particular its belief change part.

In our approach, building on previous work in Fariñas del Cerro et al. (1998), we implement belief change by an axiom of belief adoption and one of belief preservation. Both of them are based on our key concept of topic of information. We refine our previous work by contextualizing topics by mental attitudes of the agents. We aim at a logic having both a complete axiomatization and proof procedure, and an effective implementation. This has motivated several choices, in particular a Sahlqvist-type modal logic (for which general completeness results exist) that is monotonic (contrary to many approaches in the literature) and which has a notion of intention that is primitive (contrary to the complex constructions in the literature).

In the next section we discuss the failure of the existing approaches to correctly handle belief change (section 2). Then we present an original approach based on topics (section 3). This is embedded in a BDI framework (section 4). Finally we illustrate the approach by a complete treatment of our running example (section 5).

¹ We use 'set of speech acts' rather than 'a speech act', because a (literal) speech act may entail indirect speech acts. We develop this question in Herzig et al. (2000).

2 EXISTING APPROACHES

The most prominent formal analyses of belief change have been done in the AGM (Alchourrón et al. 1985) and the KM (Katsuno & Mendelzon 1992) frameworks.
There, a belief change operator ∘ is used to define the new state S ∘ A from the previous state S and the input A.² There are two difficulties if we want to use such a framework.

First, until now, revision and update operators have only been studied for classical propositional logic, and not for epistemic or doxastic logics. But an appropriate theory of dialogues should precisely be about the change of beliefs about other agents' beliefs: an agent i believing that p, and that another agent j believes p, must be able to switch to believing that j believes ¬p.³

Second, the success postulate (S ∘ A) → A (the input A always has priority) is problematic: in some approaches the new information may be rejected (as in Sadek's); in our approach the new information is always accepted, but not all of its consequences. We also reject the postulate (S ∘ A) ↔ S if S → A, because it neglects the over-informing nature of some information: our agents may behave differently in cases of over-information.

In the rest of this section we review the logical analyses of belief change in dialogues that have been proposed in the literature. Due to the above difficulties in formalizing belief change within the existing frameworks for revision or update, belief change is there integrated into a formal theory of rational behaviour.

² We view S as not closed under logical consequence. Therefore it can be identified with the conjunction of its elements. Just like Katsuno & Mendelzon (1992), we view ∘ as a (metalanguage) operator mapping the formulas S and A to a formula.

³ Nevertheless, it is known in the belief revision literature that the AGM revision postulates must be considerably weakened if the language contains modalities (Fuhrmann's impossibility theorem (Fuhrmann 1989); Hansson (1999: section 5.1)).

2.1 Cohen & Levesque

Cohen & Levesque (1990a, 1990c) have defined a formal theory of rational interaction where an agent may accept new pieces of information ('inputs' for short).
In this approach, the input corresponds to the speaker's intention to obtain some effect rather than to the speech act itself. The hearer's belief adoption is conditioned by the speaker's sincerity. Their theory allows the agent both to change his beliefs and to reject the input (if the speaker is believed to be insincere). However, as Sadek notes (Sadek 1991), even lies might generate some effects (for example, the hearer adds to his beliefs that the speaker is insincere). Thus even if the input is rejected, the mental state of the hearer evolves.

Finally, in Cohen & Levesque's approach, beliefs that are not undermined by the act are never preserved from the preceding mental state to the new one (cf. the frame problem in Artificial Intelligence (McCarthy & Hayes 1969)). Thus inconsistency of the newly acquired beliefs with old ones never arises, simply because the old beliefs are given up by the agent. (Such a behaviour corresponds to what has been called the trivial belief change operation in the AGM and KM literature.)
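The trivial belief change operation mentioned above can be made concrete with a toy sketch (our own illustration, not a construction from the paper): the whole old state is discarded and only the input survives, so inconsistency with old beliefs can never arise, but nothing is preserved either.

```python
def trivial_change(old_beliefs, inputs):
    """Trivial belief change: the previous state plays no role at all."""
    return frozenset(inputs)  # old_beliefs is deliberately ignored

# Tie-in with the running example: after u2 the class changes, and under
# trivial change the destination is lost too, although u2 never undermined it.
old = frozenset({"dest=Paris", "class=first"})
new = trivial_change(old, {"class=second"})
# new == frozenset({"class=second"}); "dest=Paris" is gone
```

This is exactly the behaviour a preservation principle is meant to rule out: beliefs on topics untouched by the input should survive.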




Journal: J. Semantics
Volume: 17, Issue: -
Pages: -
Published: 2000